Nachdiplom Lecture: Statistics Meets Optimization. Some Comments on Relative Costs

Author

  • Yuting Wei
Abstract

Directly solving the ordinary least squares problem will (in general) require $O(nd^2)$ operations. From Table 5.1, the Gaussian sketch does not actually improve upon this scaling for unconstrained problems: when $m \gtrsim d$ (as is needed in the unconstrained case), computing the sketch $SA$ requires $O(nd^2)$ operations as well. If we compute sketches using the JLT, then this cost is reduced to $O(nd \log d)$, so that we do see some significant savings relative to OLS. There are other strategies, of course. In a statistical setting, in which the rows of $(A, y)$ correspond to distinct samples, it is natural to consider a method based on sample splitting. That is, suppose that we do the following: ...
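To make the cost comparison concrete, here is a minimal numpy sketch of unconstrained least squares solved directly and via a Gaussian sketch. This is an illustration under assumed dimensions (the values of $n$, $d$, and the sketch size $m$ are arbitrary choices, not from the lecture):

```python
# Minimal illustration: direct OLS vs. Gaussian-sketched OLS.
# All dimensions below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 50              # n samples, d features
m = 4 * d                      # sketch dimension m >~ d (unconstrained case)

A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Direct OLS: an SVD/QR-based solve costs O(n d^2).
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Gaussian sketch: forming S A is a dense (m x n)(n x d) multiply,
# i.e. O(m n d) = O(n d^2) when m >~ d -- no asymptotic saving.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)

print(np.linalg.norm(x_ls - x_sketch))  # small when m >> d
```

A fast JLT-type sketch would replace the dense multiply $SA$ with a structured randomized transform, bringing the sketching cost down to the $O(nd \log d)$ regime mentioned above.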


Similar Resources

Nachdiplom Lecture: Statistics meets Optimization, Fall 2015, Lecture 1 – Tuesday, September 22

In the modern era of high-dimensional data, the interface of mathematical statistics and optimization has become an increasingly vibrant area of research. The goal of these lectures is to touch on various evolving areas at this interface. Before going into the details proper, let’s consider some high-level ways in which the objectives of optimization can be influenced by underlying statistical ob...


Nachdiplom Lecture: Statistics Meets Optimization, Lecture 2 – Tuesday, September 29. 2.1 A General Theorem on Gaussian Random Projections

Let $K$ be a subset of the Euclidean sphere $\mathcal{S}^{d-1}$. As seen in Lecture #1, in analyzing how well a given random projection matrix $S \in \mathbb{R}^{m \times d}$ preserves vectors in $K$, a central object is the random variable $$Z(K) = \sup_{u \in K} \left| \frac{\|Su\|_2^2}{m} - 1 \right|. \tag{2.1}$$ Suppose that our goal is to establish that, for some $\delta \in (0, 1)$, we have $Z(K) \leq \delta$ with high probability. How large must the projection dimension $m$ be, as a funct...
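As a rough illustration of the quantity in (2.1), the following hypothetical snippet estimates $Z(K)$ by brute force for a finite set $K$ of points on the sphere; the dimensions and the number of points are arbitrary choices, not values from the lecture:

```python
# Monte Carlo illustration of Z(K) = sup_{u in K} | ||Su||_2^2 / m - 1 |
# for a finite K on the sphere S^{d-1}; all sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, m, num_pts = 200, 50, 1000

# K: num_pts random points on the Euclidean sphere S^{d-1}
U = rng.standard_normal((num_pts, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)

# Gaussian projection matrix S in R^{m x d} with N(0, 1) entries
S = rng.standard_normal((m, d))

# Row i of U @ S.T is S u_i; take the worst distortion over K
Z = np.max(np.abs(np.sum((U @ S.T) ** 2, axis=1) / m - 1.0))
print(f"Z(K) ~ {Z:.3f}")  # decreases as the projection dimension m grows
```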


Nachdiplom Lecture: Statistics Meets Optimization. 4.1 Problem Setup and Motivation

Last time, we proved that a sketch dimension $m \gtrsim \frac{1}{\delta^2} \mathcal{W}^2\big(A\mathcal{K}(x_{\mathrm{LS}}) \cap \mathcal{S}^{n-1}\big)$ is sufficient to ensure this property. A related but slightly different notion of approximation is that of solution approximation, in which we measure the quality in terms of some norm between $\hat{x}$ and $x_{\mathrm{LS}}$. Defining the (semi-)norm $\|u\|_A := \sqrt{u^T A^T A u / n}$, let us say that $\hat{x}$ is a $\delta$-accurate solution approximation if $\|x_{\mathrm{LS}} - \hat{x}\|_A \leq \delta$. ...
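For concreteness, here is a small hypothetical numpy check of this notion of solution approximation, computing $\|x_{\mathrm{LS}} - \hat{x}\|_A$ for a Gaussian-sketched solve; the helper name seminorm_A and all dimensions are my own choices, not part of the notes:

```python
# Hypothetical check of delta-accurate solution approximation in the
# semi-norm ||u||_A = sqrt(u^T A^T A u / n) = ||A u||_2 / sqrt(n).
import numpy as np

def seminorm_A(A: np.ndarray, u: np.ndarray) -> float:
    """The prediction (semi-)norm ||u||_A = ||A u||_2 / sqrt(n)."""
    return np.linalg.norm(A @ u) / np.sqrt(A.shape[0])

rng = np.random.default_rng(2)
n, d, m = 2000, 20, 400
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)           # exact solution
S = rng.standard_normal((m, n)) / np.sqrt(m)           # Gaussian sketch
x_hat, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)  # sketched solution

err = seminorm_A(A, x_ls - x_hat)
print(f"||x_LS - x_hat||_A = {err:.4f}")  # shrinks as m grows relative to d
```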


Nachdiplom Lecture: Statistics meets Optimization, Fall 2015, Lecture 9 – Tuesday, December 1

A random vector $x_i \in \mathbb{R}^d$ is said to be drawn from a spiked covariance model if it can be written in the form $x_i = F^* \sqrt{\Gamma}\, \xi_i + w_i$, where $F^* \in \mathbb{R}^{d \times r}$ is a fixed matrix with orthonormal columns; $\Gamma = \mathrm{diag}\{\gamma_1, \ldots, \gamma_r\}$ is a diagonal matrix with $\gamma_1 \geq \gamma_2 \geq \cdots \geq \gamma_r > 0$; $\xi_i \in \mathbb{R}^r$ is a zero-mean random vector with identity covariance, and $w_i$ is a zero-mean random vector, independent of $\xi_i$, and with identity ...
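The model is straightforward to simulate. The following hypothetical snippet draws samples $x_i = F^* \sqrt{\Gamma}\, \xi_i + w_i$ and checks that the top sample eigenvalues sit near $1 + \gamma_j$; the dimensions and spike values are arbitrary choices, not from the lecture:

```python
# Sampler for a spiked covariance model; all sizes/spikes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
d, r, n = 100, 3, 5000
gamma = np.array([5.0, 2.0, 1.0])    # spikes gamma_1 >= ... >= gamma_r > 0

# F*: a fixed d x r matrix with orthonormal columns, built via QR
F, _ = np.linalg.qr(rng.standard_normal((d, r)))

xi = rng.standard_normal((n, r))     # zero mean, identity covariance
w = rng.standard_normal((n, d))      # independent noise, identity covariance
X = (xi * np.sqrt(gamma)) @ F.T + w  # row i is the sample x_i

# Population covariance is F Gamma F^T + I_d, so the top r eigenvalues
# of the sample covariance should sit near 1 + gamma_j.
evals = np.linalg.eigvalsh(X.T @ X / n)
print(evals[-r:][::-1])              # approximately [6.0, 3.0, 2.0]
```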


Nachdiplom Lecture: Statistics Meets Optimization. 7.1 Introduction

In previous lectures, we have analyzed random forms of optimization problems, in which the randomness was injected (via random projection) for algorithmic reasons. On the other hand, in statistical problems—even without considering approximations—the starting point is a random instance of an optimization problem. To be more concrete, suppose that we are interested in estimating some parameter θ...




Publication date: 2015